
    Deterministic and stochastic methods for gaze tracking in real-time

    Presented at the 12th International Conference on Computer Analysis of Images and Patterns (CAIP-2007), held in Vienna (Austria), 27-29 August.

    Psychological evidence shows that eye-gaze analysis is required for human-computer interaction endowed with emotion-recognition capabilities. Existing proposals analyse eyelid and iris motion using colour information and edge detectors, but eye movements are fast, which makes precise and robust tracking difficult. Instead, we propose to reduce the dimensionality of the image data by multi-Gaussian modelling, and to estimate transitions by applying partial differences. The tracking system handles illumination changes, low image resolution and occlusions while estimating eyelid and iris movements as continuous variables. The result is an accurate and robust 3D tracking system for eyelids and irises at standard image quality.

    This work is supported by EC grants IST-027110 for the HERMES project and IST-045547 for the VIDI-Video project, and by the Spanish MEC under projects TIN2006-14606 and DPI-2004-5414. Jordi Gonzàlez also acknowledges the support of a Juan de la Cierva Postdoctoral fellowship from the Spanish MEC.

    Peer Reviewed
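    The abstract's key idea is to reduce eye-region image data to a few Gaussians rather than track edges directly. As a hedged illustration only (the paper's actual model, parameters and features are not given in the abstract), the sketch below classifies a pixel intensity as iris, skin or sclera by maximum likelihood over three assumed 1-D Gaussians; all class names, means and standard deviations are illustrative.

```python
import math

# Illustrative 1-D Gaussians over pixel intensity (0-255); these values are
# assumptions for the sketch, not parameters from the paper.
CLASSES = {
    "iris":   (30.0, 15.0),    # (mean intensity, std dev): dark region
    "skin":   (120.0, 25.0),   # mid-range intensities
    "sclera": (210.0, 20.0),   # bright region
}

def gaussian_pdf(x, mean, std):
    """Probability density of N(mean, std^2) at x."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2.0 * math.pi))

def classify_pixel(intensity):
    """Assign a pixel to the Gaussian class with the highest likelihood."""
    return max(CLASSES, key=lambda c: gaussian_pdf(intensity, *CLASSES[c]))

print(classify_pixel(25))    # dark pixel   -> iris
print(classify_pixel(200))   # bright pixel -> sclera
```

    Working with a handful of Gaussian parameters instead of raw pixels is what makes such a representation cheap enough to track fast eye movements in real time.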

    Interpretation of complex situations in a semantic-based surveillance framework

    The integration of cognitive capabilities in computer vision systems requires both high semantic expressiveness and the ability to cope with the high computational cost of analysing large amounts of data. This contribution describes a cognitive vision system conceived to automatically provide high-level interpretations of complex real-time situations in outdoor and indoor scenarios, and eventually to maintain communication with casual end users in multiple languages. The main contributions are: (i) the design of an integrative multilevel architecture for cognitive surveillance purposes; (ii) the proposal of a coherent taxonomy of knowledge to guide the interpretation process, which leads to the conception of a situation-based ontology; (iii) the use of situational analysis for content detection and progressive interpretation of semantically rich scenes, managing incomplete or uncertain knowledge; and (iv) the use of this ontological background to enable multilingual capabilities and advanced end-user interfaces. Experimental results show the feasibility of the proposed approach.

    This work was supported by the project 'CONSOLIDER-INGENIO 2010 Multimodal interaction in pattern recognition and computer vision' (V-00069), by EC grants IST-027110 for the HERMES project and IST-045547 for the VIDI-Video project, and by the Spanish MEC under projects TIN2006-14606 and CONSOLIDER-INGENIO 2010 (CSD2007-00018). Jordi Gonzàlez also acknowledges the support of a Juan de la Cierva Postdoctoral fellowship from the Spanish MEC.

    Peer Reviewed

    Interpretation of human motion in image sequences using situation graph trees

    CVC Workshop on Computer Vision: Advances in Research & Development (CVCRD), 2006, Bellaterra (Spain).

    The evaluation of human behaviour patterns in given scenes has long been studied in the social and cognitive sciences, but it now poses a challenge for computer science because of the complexity of extracting and analysing the necessary data. Results obtained in this research will be helpful to the cognitive sciences, above all in human-computer interaction and video surveillance. Our information source is an image sequence, previously processed with pattern-recognition algorithms to extract quantitative data about the trajectories of the agents within the scene. Reasoning about human behaviour requires machine-learning techniques that represent behaviours in a qualitative manner, allowing natural-language explanation of the scene. This is achieved by means of a rule-based inference engine called F-Limette and a behaviour-modelling tool based on Situation Graph Trees. The success of this approach depends on the precision of the image-analysis system, the selection of suitable reasoning tools and the design of useful behaviour models. The model was tested in a street scene in which the agents of interest were pedestrians: textual descriptions are generated that qualitatively describe the observed behaviour. Experimental results are provided by defining three different behaviours at a pedestrian crossing. This will allow us to confront sociological theories about human behaviour, whose quantitative basis is at present computed from statistics rather than from semantic concepts.

    This work was supported by the project 'Integration of robust perception, learning, and navigation systems in mobile robotics' (J-0929).

    Peer Reviewed
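    The pipeline above maps quantitative trajectory data to qualitative situations and then to text. A minimal sketch of that idea, assuming a flat list of candidate situations with hand-written predicates (the real system uses Situation Graph Trees and F-Limette rules, neither of which is reproduced here):

```python
# Each candidate situation: (name, predicate over tracker data, description).
# Predicates and wording are illustrative assumptions, not the paper's rules.
SITUATIONS = [
    ("waiting",  lambda s: s["speed"] < 0.2, "the pedestrian waits at the kerb"),
    ("crossing", lambda s: s["on_crosswalk"], "the pedestrian crosses the road"),
    ("walking",  lambda s: True,             "the pedestrian walks along the sidewalk"),
]

def describe(state):
    """Return the natural-language description of the first matching situation."""
    for name, predicate, text in SITUATIONS:
        if predicate(state):
            return text

print(describe({"speed": 0.1, "on_crosswalk": False}))  # waiting
print(describe({"speed": 1.3, "on_crosswalk": True}))   # crossing
```

    A Situation Graph Tree additionally constrains which situation may follow which, so recognition at frame t starts from the successors of the situation matched at frame t-1 rather than from the whole list.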

    Semantic annotation of complex human scenes for multimedia surveillance

    Presented at the 10th Congress of the Italian Association for Artificial Intelligence (AI*IA-2007), held in Rome (Italy), 10-13 September.

    A Multimedia Surveillance System (MSS) is considered for automatically retrieving semantic content from complex outdoor scenes involving both human behaviour and traffic domains. To characterize the dynamic information attached to detected objects, we consider a deterministic modelling of spatio-temporal features based on abstraction processes towards a fuzzy-logic formalism. A situational analysis over the conceptualized information not only allows us to describe human actions within a scene, but also to suggest possible interpretations of the perceived behaviours, such as situations involving thefts or the danger of being run over. Towards this end, the different levels of semantic knowledge implied throughout the process are also classified into a proposed taxonomy.

    This work has been supported by EC grant IST-027110 for the HERMES project and by the Spanish MEC under projects TIC-2003-08865 and DPI-2004-5414. Jordi Gonzàlez also acknowledges the support of a Juan de la Cierva Postdoctoral fellowship from the Spanish MEC.

    Peer Reviewed
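    The abstraction step described above turns a numeric spatio-temporal feature into fuzzy qualitative predicates. A hedged sketch under assumed membership functions (the labels and breakpoints are illustrative, not the paper's formalism):

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_speed(speed):
    """Degrees of membership of a speed (m/s) in qualitative labels.
    Breakpoints are illustrative assumptions."""
    return {
        "standing": triangular(speed, -0.5, 0.0, 0.5),
        "walking":  triangular(speed, 0.2, 1.4, 2.5),
        "running":  triangular(speed, 2.0, 4.0, 8.0),
    }

degrees = fuzzify_speed(1.4)
print(max(degrees, key=degrees.get))   # -> walking
```

    Because memberships are graded rather than crisp, a speed near a boundary (say 2.2 m/s) belongs partly to 'walking' and partly to 'running', which lets later situational analysis reason with uncertain observations.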

    3D human motion sequences synchronization using dense matching algorithm

    Annual Symposium of the German Association for Pattern Recognition (DAGM), 2006, Berlin (Germany).

    This work solves the problem of synchronizing pre-recorded human motion sequences that exhibit different speeds and accelerations, using a novel dense matching algorithm. The approach is based on the dynamic-programming principle, which allows an optimal solution to be found very quickly. Additionally, an optimal sequence is automatically selected from the input data set to serve as the time-scale pattern for all other sequences. The synchronized motion sequences are used to learn a model of human motion for action recognition and full-body tracking.

    This work was supported by the project 'Integration of robust perception, learning, and navigation systems in mobile robotics' (J-0929).

    Peer Reviewed
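    The classic dynamic-programming alignment in this spirit is dynamic time warping (DTW); the sketch below aligns two 1-D motion signals recorded at different speeds. It is an assumption-laden stand-in for the paper's dense matching: the actual cost function and pattern-selection step are not reproduced here.

```python
def dtw(a, b):
    """Minimal cumulative alignment cost between sequences a and b
    (classic dynamic time warping over absolute differences)."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # match, insertion, or deletion: keep the cheapest predecessor
            cost[i][j] = d + min(cost[i - 1][j - 1],
                                 cost[i - 1][j],
                                 cost[i][j - 1])
    return cost[n][m]

slow = [0, 0, 1, 1, 2, 2, 3, 3]   # same motion performed at half speed
fast = [0, 1, 2, 3]
print(dtw(slow, fast))            # -> 0.0: the warp absorbs the speed change
```

    The table fill is O(n*m), and because every cell is visited the alignment is dense: each frame of one sequence receives a correspondence in the other, which is what makes the synchronized sequences usable for learning a common motion model.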

    Automatic generation of computer animated sequences based on human behaviour modelling

    Presented at the International Conference in Computer Graphics and Artificial Intelligence (3IA), 2007, Athens (Greece).

    This paper presents a complete framework for automatically generating synthetic image sequences by designing and simulating complex human behaviours in virtual environments. Given an initial state of a virtual agent, a simulation process generates posterior synthetic states by means of precomputed human motion and behaviour models, taking into account the relationships of the agent w.r.t. its environment at each frame step. The resulting state sequence is then visualized in a virtual scene using a 3D graphics engine. Conceptual knowledge about human behaviour patterns is represented using the Situation Graph Tree formalism and a rule-based inference system called F-Limette. The results obtained are very helpful for testing human interaction with real environments, such as a pedestrian-crossing scenario, and for virtual storytelling, where animated sequences are generated automatically.

    This work was supported by the project 'Integration of robust perception, learning, and navigation systems in mobile robotics' (J-0929), by EC grants IST-027110 for the HERMES project and IST-045547 for the VIDI-Video project, and by the Spanish MEC under projects TIN2006-14606 and DPI-2004-5414. Jordi Gonzàlez also acknowledges the support of a Juan de la Cierva Postdoctoral fellowship from the Spanish MEC.

    Peer Reviewed
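    The generate-then-visualize loop described above can be sketched in miniature. The state variables (position, speed) and the constant-speed transition are assumptions for illustration; the real framework drives this step with precomputed motion/behaviour models and Situation Graph Trees.

```python
def step(state, dt=1.0):
    """Produce the posterior agent state from the current one
    (a trivial constant-speed motion model, assumed for the sketch)."""
    x, v = state
    return (x + v * dt, v)

def simulate(initial, n_frames):
    """Roll the agent state forward frame by frame; each resulting state
    would then be handed to the 3D engine for rendering."""
    states = [initial]
    for _ in range(n_frames):
        states.append(step(states[-1]))
    return states

trajectory = simulate((0.0, 1.5), 4)   # start at x=0, walk at 1.5 m/s
print(trajectory[-1][0])               # agent position after 4 frames -> 6.0
```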

    Moving cast shadows detection methods for video surveillance applications

    Moving cast shadows are a major concern for a broad range of today's vision-based surveillance applications because they severely hinder the object classification task. Several shadow detection methods have been reported in the literature in recent years. They fall mainly into two domains: one works with static images, whereas the other uses image sequences, i.e. video content. Although both cases can be analysed analogously, they differ in their fields of application. In the first case, shadow detection methods can be exploited to obtain additional geometric and semantic cues about the shape and position of the casting object ('shape from shadows'), as well as the localization of the light source. In the second case, the main purpose is usually change detection, scene matching or surveillance (typically in a background-subtraction context). Shadows can modify the shape and colour of the target object in a negative way and therefore degrade the performance of scene analysis and interpretation in many applications. This chapter mainly reviews shadow detection methods, and their taxonomies, related to the second case, i.e. shadows associated with moving objects (moving shadows).

    Peer Reviewed
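    A common family of moving-shadow tests in background-subtraction pipelines, in the spirit of the methods this chapter surveys, labels a foreground pixel as shadow when it darkens the background by a bounded ratio while keeping roughly the same chromaticity. The sketch below is a generic illustration of that idea; the thresholds and the specific chromaticity measure are assumptions, not values from any particular surveyed method.

```python
ALPHA, BETA = 0.4, 0.9   # allowed darkening range of the luminance ratio
TAU = 0.1                # max chromaticity change tolerated for a shadow

def chromaticity(rgb):
    """Normalised (r, g) chromaticity, largely independent of brightness."""
    r, g, b = rgb
    s = (r + g + b) or 1
    return (r / s, g / s)

def is_shadow(fg, bg):
    """True if foreground pixel fg looks like a shadow cast on background bg:
    darker by a bounded factor, but with nearly unchanged colour."""
    ratio = (sum(fg) / 3.0) / ((sum(bg) / 3.0) or 1.0)
    cf, cb = chromaticity(fg), chromaticity(bg)
    same_colour = abs(cf[0] - cb[0]) < TAU and abs(cf[1] - cb[1]) < TAU
    return ALPHA <= ratio <= BETA and same_colour

bg_pixel = (120, 120, 120)                 # grey road surface
print(is_shadow((70, 70, 70), bg_pixel))   # darker, same hue -> True
print(is_shadow((40, 10, 10), bg_pixel))   # dark red object  -> False
```

    Reclassifying such pixels as background rather than foreground is what prevents a pedestrian and their shadow from being merged into one distorted blob before object classification.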